ICAS: Detecting Training Data from Autoregressive Image Generative Models
Hongyao Yu, Yixiang Qiu, Yiheng Yang, Hao Fang, Tianqu Zhuang, Jiaxin Hong, Bin Chen, Hao Wu, Shu-Tao Xia
Autoregressive image generation has witnessed rapid advancements, with prominent models such as scale-wise visual autoregression pushing the boundaries of visual synthesis. However, these developments also raise significant concerns regarding data privacy and copyright. In response, training data detection has emerged as a critical task for identifying unauthorized data usage in model training. To better understand the vulnerability of autoregressive image generative models to such detection, we conduct the first study applying membership inference to this domain. Our approach comprises two key components: implicit classification and an adaptive score aggregation strategy. First, we compute the implicit token-wise classification score within the query image. Then we propose an adaptive score aggregation strategy to acquire a final score, which places greater emphasis on the tokens with lower scores. A higher final score indicates that the sample is more likely to be part of the training set. To validate the effectiveness of our method, we adapt existing detection algorithms originally designed for LLMs to visual autoregressive models. Extensive experiments demonstrate the superiority of our method in both class-conditional and text-to-image scenarios. Moreover, our approach exhibits strong robustness and generalization under various data transformations. Furthermore, our experiments suggest two novel key findings: (1) a linear scaling law on membership inference, exposing the vulnerability of large foundation models; and (2) training data from scale-wise visual autoregressive models is easier to detect than from other autoregressive paradigms. Our code is available at https://github.com/Chrisqcwx/ImageAR-MIA.
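The adaptive aggregation described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of the idea of weighting low-scoring tokens more heavily when combining per-token scores; the weighting here is a softmax over negated scores, and the paper's exact aggregation rule may differ.

```python
import math

def adaptive_aggregate(token_scores, temperature=1.0):
    """Combine per-token scores into one membership score.

    Illustrative sketch: a softmax over negated scores gives
    lower-scoring tokens larger weights, so the aggregate is
    pulled toward the tokens the model is least confident on.
    (Hypothetical weighting; not the paper's exact rule.)
    """
    weights = [math.exp(-s / temperature) for s in token_scores]
    total = sum(weights)
    return sum((w / total) * s for w, s in zip(weights, token_scores))

# A sample with uniform token scores aggregates to that score,
# while a mixed sample is dragged below its plain mean by the
# emphasis on low-scoring tokens.
print(adaptive_aggregate([1.0, 1.0]))   # equals 1.0
print(adaptive_aggregate([0.0, 2.0]))   # below the mean of 1.0
```

A higher aggregate would then indicate a likely training-set member, matching the decision rule described in the abstract.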
MCN-CL: Multimodal Cross-Attention Network and Contrastive Learning for Multimodal Emotion Recognition
Multimodal emotion recognition plays a key role in many domains, including mental health monitoring, educational interaction, and human-computer interaction. However, existing methods often face three major challenges: unbalanced category distribution, the complexity of modeling dynamic facial action units over time, and the difficulty of feature fusion due to modal heterogeneity. With the explosive growth of multimodal data in social media scenarios, the need for an efficient cross-modal fusion framework for emotion recognition is becoming increasingly urgent. To this end, this paper proposes the Multimodal Cross-Attention Network with Contrastive Learning (MCN-CL) for multimodal emotion recognition. It uses a triple query mechanism and a hard negative mining strategy to remove feature redundancy while preserving important emotional cues, effectively addressing the issues of modal heterogeneity and category imbalance. Experimental results on the IEMOCAP and MELD datasets show that our proposed method outperforms state-of-the-art approaches, with Weighted F1 scores improving by 3.42% and 5.73%, respectively.
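The hard negative mining strategy mentioned above can be illustrated with a small numeric sketch. The snippet below shows a generic InfoNCE-style contrastive loss that keeps only the k negatives most similar to the anchor (the "hard" ones); the function name, the choice of cosine similarity, and the temperature value are assumptions for illustration, not MCN-CL's exact formulation.

```python
import math

def hard_negative_infonce(anchor, positive, negatives, k=2, tau=0.1):
    """InfoNCE-style loss over the k hardest negatives.

    Hard negatives are those with the highest cosine similarity
    to the anchor; focusing the loss on them sharpens the
    decision boundary. (Illustrative sketch only.)
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    pos = cos(anchor, positive) / tau
    # Keep the k most anchor-similar (hardest) negatives.
    neg = sorted(cos(anchor, n) / tau for n in negatives)[-k:]
    m = max([pos] + neg)  # subtract max for numerical stability
    denom = sum(math.exp(s - m) for s in [pos] + neg)
    return -math.log(math.exp(pos - m) / denom)

anchor = [1.0, 0.0]
negatives = [[0.0, 1.0], [-1.0, 0.0], [0.9, 0.1]]
easy = hard_negative_infonce(anchor, [1.0, 0.0], negatives)  # aligned positive
hard = hard_negative_infonce(anchor, [0.0, 1.0], negatives)  # misaligned positive
```

As expected, the loss is small when the positive is well aligned with the anchor and grows when it is not, which is what drives the learned features toward emotion-discriminative clusters.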
c96c08f8bb7960e11a1239352a479053-AuthorFeedback.pdf
We appreciate the constructive comments and valuable points raised by the reviewers and the editor. Some expressions in this paper are not sufficiently precise or concise, and we will do our best to improve the writing. Our method is an improvement on the loss function that does not increase inference cost. Table 1 in our paper presents the results of the ablation study. We will include the related discussions and further experimental results in the new version, should our paper be accepted.
We thank all reviewers for the insightful feedback. Below we address all questions raised in the reviews. More intuition can be added in Section 3: COT could greatly benefit sequential learning. To support our intuition, we provide two arguments in Appendix A.3. For the justification, please see our response to Reviewer 2. We compare against WaveGAN (trained with the WGAN-GP loss) and COT-GAN without the mixing trick. We respectfully disagree with the reviewer on this comment.